# Large Language Models
## LongAlpaca 70B (Yukang)
LongLoRA is an efficient fine-tuning method that extends the context window of large language models: using shifted sparse attention (S²-Attn), it supports context lengths from 8k up to 100k tokens.
Tags: Large Language Model · Transformers
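The core idea of shifted sparse attention can be illustrated in a few lines: tokens attend only within fixed-size groups, and a half-group shift lets information flow across group boundaries. The sketch below is a minimal single-head, non-causal illustration in NumPy (my own simplification, not the paper's multi-head implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shifted_sparse_attention(q, k, v, group_size, shift=False):
    """Attention computed independently within groups of tokens.

    q, k, v: arrays of shape (seq_len, d). With shift=True, tokens are
    rolled by half a group before grouping and rolled back afterwards,
    so information crosses group boundaries (the role played by half
    the attention heads in LongLoRA's S²-Attn).
    """
    n, d = q.shape
    s = group_size // 2 if shift else 0
    q, k, v = (np.roll(a, -s, axis=0) for a in (q, k, v))
    out = np.empty_like(v)
    for start in range(0, n, group_size):
        sl = slice(start, start + group_size)
        scores = q[sl] @ k[sl].T / np.sqrt(d)  # attention within one group
        out[sl] = softmax(scores) @ v[sl]
    return np.roll(out, s, axis=0)  # undo the shift
```

Because each group attends only to its own `group_size` keys, the cost is linear in sequence length for a fixed group size, which is what makes long-context fine-tuning cheap.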
## MathCoder-L-7B (MathLLMs)
License: Apache-2.0
MathCoder is a series of open-source large language models tailored for general mathematical problem solving, fine-tuned from Llama-2 and Code Llama.
Tags: Large Language Model · Transformers · English
## WizardCoder-Python-13B-V1.0 (WizardLMTeam)
WizardCoder is a code-focused large language model enhanced with the Evol-Instruct method, specializing in code generation tasks.
Tags: Large Language Model · Transformers · Other
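Evol-Instruct works by asking an LLM to repeatedly rewrite a seed instruction into a harder variant. The sketch below substitutes fixed rewrite templates for the LLM call, purely to illustrate the evolution loop; the template strings and the `evolve` helper are hypothetical, not part of WizardCoder's pipeline.

```python
import random

# Hypothetical stand-ins for the LLM-driven "in-depth evolving" prompts:
# each template wraps the previous instruction in an extra constraint.
EVOLUTIONS = [
    "{prev} Additionally, handle invalid input gracefully.",
    "{prev} Constrain your solution to O(n log n) time.",
    "Explain your reasoning step by step, then: {prev}",
]

def evolve(seed, rounds=2, rng=None):
    """Return the chain of progressively harder instructions,
    starting from the seed. In real Evol-Instruct the rewrite is
    produced by an LLM rather than a template."""
    rng = rng or random.Random(0)
    chain = [seed]
    for _ in range(rounds):
        template = rng.choice(EVOLUTIONS)
        chain.append(template.format(prev=chain[-1]))
    return chain
```

The evolved instructions (paired with model responses) then form the fine-tuning set, which is how Evol-Instruct scales instruction difficulty without human authoring.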
## WizardMath 70B V1.0 (WizardLMTeam)
WizardMath-7B-V1.1, trained on Mistral-7B, is a large language model specializing in mathematical reasoning. It uses Reinforcement Learning from Evol-Instruct Feedback (RLEIF) to boost performance and is currently state of the art among 7B-scale math models.
Tags: Large Language Model · Transformers